Accurate brain-age models for routine clinical MRI examinations
Convolutional neural networks (CNN) can accurately predict chronological age in healthy individuals from structural MRI brain scans. Potentially, these models could be applied during routine clinical examinations to detect deviations from healthy ageing, including early-stage neurodegeneration. This could have important implications for patient care, drug development, and optimising MRI data collection. However, existing brain-age models are typically optimised for scans which are not part of routine examinations (e.g., volumetric T1-weighted scans), generalise poorly (e.g., to data from different scanner vendors and hospitals), or rely on computationally expensive pre-processing steps which limit real-time clinical utility. Here, we sought to develop a brain-age framework suitable for use during routine clinical head MRI examinations. Using a deep learning-based neuroradiology report classifier, we generated a dataset of 23,302 'radiologically normal for age' head MRI examinations from two large UK hospitals for model training and testing (age range = 18-95 years), and demonstrate fast (< 5 seconds), accurate (mean absolute error [MAE] < 4 years) age prediction from clinical-grade, minimally processed axial T2-weighted and axial diffusion-weighted scans, with generalisability between hospitals and scanner vendors (Δ MAE < 1 year). The clinical relevance of these brain-age predictions was tested using 228 patients whose MRIs were reported independently by neuroradiologists as showing atrophy 'excessive for age'. These patients had systematically higher brain-predicted age than chronological age (mean predicted age difference = +5.89 years, 'radiologically normal for age' mean predicted age difference = +0.05 years, p < 0.0001).
Our brain-age framework demonstrates feasibility for use as a screening tool during routine hospital examinations to automatically detect older-appearing brains in real-time, with relevance for clinical decision-making and optimising patient pathways.
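The screening logic described above reduces to comparing brain-predicted age with chronological age. The following is a minimal illustrative sketch of that comparison; the function names and the flagging threshold are assumptions, not the authors' code.

```python
# Hypothetical sketch of brain-age screening: compute the predicted age
# difference (PAD) and flag examinations whose brains appear older than
# expected. The 5-year threshold is an illustrative assumption.

def predicted_age_difference(predicted_age: float, chronological_age: float) -> float:
    """PAD = brain-predicted age minus chronological age, in years."""
    return predicted_age - chronological_age

def flag_older_appearing(predicted_age: float, chronological_age: float,
                         threshold: float = 5.0) -> bool:
    """Flag a scan for review when PAD exceeds the chosen threshold (years)."""
    return predicted_age_difference(predicted_age, chronological_age) > threshold

# A patient whose scan shows 'atrophy excessive for age' might have a PAD
# near the reported +5.89-year group mean and would be flagged:
print(flag_older_appearing(70.9, 65.0))
```

In the abstract's terms, the 'radiologically normal for age' group clusters near PAD = 0, while the 'atrophy excessive for age' group sits several years above it, which is what makes a simple threshold on PAD usable as a screen.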
Automated triaging of head MRI examinations using convolutional neural networks
The growing demand for head magnetic resonance imaging (MRI) examinations,
along with a global shortage of radiologists, has led to an increase in the
time taken to report head MRI scans around the world. For many neurological
conditions, this delay can result in increased morbidity and mortality. An
automated triaging tool could reduce reporting times for abnormal examinations
by identifying abnormalities at the time of imaging and prioritizing the
reporting of these scans. In this work, we present a convolutional neural
network for detecting clinically-relevant abnormalities in
T2-weighted head MRI scans. Using a validated neuroradiology report
classifier, we generated a labelled dataset of 43,754 scans from two large UK
hospitals for model training, and demonstrate accurate classification (area
under the receiver operating curve (AUC) = 0.943) on a test set of 800 scans
labelled by a team of neuroradiologists. Importantly, when trained on scans
from only a single hospital the model generalized to scans from the other
hospital (ΔAUC = 0.02). A simulation study demonstrated that our
model would reduce the mean reporting time for abnormal examinations from 28
days to 14 days and from 9 days to 5 days at the two hospitals, demonstrating
feasibility for use in a clinical triage environment.Comment: Accepted as an oral presentation at Medical Imaging with Deep
Learning (MIDL) 202
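The reporting-time reduction rests on a queueing effect: when a backlog grows, letting flagged abnormal scans jump the queue shortens their wait without changing reader throughput. A minimal sketch of that effect, with illustrative arrival and reporting rates that are assumptions rather than the paper's simulation parameters:

```python
# Toy queue model: scans accumulate in a backlog; a reader reports a fixed
# number per day. With triage, abnormal scans are prioritised over the FIFO
# order. All parameters here are illustrative assumptions.
import heapq

def simulate(arrivals, reports_per_day, triage):
    """arrivals[d] lists booleans (is_abnormal) for scans arriving on day d.
    Returns the mean delay in days from arrival to report for abnormal scans.
    With triage=True, abnormal scans jump ahead of the FIFO backlog."""
    queue, seq = [], 0          # heap of (priority, arrival_day, seq, is_abnormal)
    abnormal_delays = []
    total = sum(len(day) for day in arrivals)
    reported, day = 0, 0
    while reported < total:
        if day < len(arrivals):
            for is_abn in arrivals[day]:
                priority = 0 if (triage and is_abn) else 1
                heapq.heappush(queue, (priority, day, seq, is_abn))
                seq += 1
        for _ in range(reports_per_day):
            if not queue:
                break
            _, arrival_day, _, is_abn = heapq.heappop(queue)
            if is_abn:
                abnormal_delays.append(day - arrival_day)
            reported += 1
        day += 1
    return sum(abnormal_delays) / len(abnormal_delays)

# Three scans arrive per day (one abnormal) but only two are reported per day,
# so a backlog grows; triage keeps the abnormal scans from waiting in it.
arrivals = [[True, False, False] for _ in range(10)]
print(simulate(arrivals, 2, triage=False))  # FIFO: abnormal scans share the backlog
print(simulate(arrivals, 2, triage=True))   # triaged: abnormal scans read first
```

The qualitative behaviour matches the abstract's simulation result: prioritisation roughly halves the mean reporting delay for abnormal examinations while the total reporting workload is unchanged.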
Labelling imaging datasets on the basis of neuroradiology reports: a validation study
Natural language processing (NLP) shows promise as a means to automate the
labelling of hospital-scale neuroradiology magnetic resonance imaging (MRI)
datasets for computer vision applications. To date, however, there has been no
thorough investigation into the validity of this approach, including
determining the accuracy of report labels compared to image labels as well as
examining the performance of non-specialist labellers. In this work, we draw on
the experience of a team of neuroradiologists who labelled over 5000 MRI
neuroradiology reports as part of a project to build a dedicated deep
learning-based neuroradiology report classifier. We show that, in our
experience, assigning binary labels (i.e. normal vs abnormal) to images from
reports alone is highly accurate. In contrast to the binary labels, however,
the accuracy of more granular labelling is dependent on the category, and we
highlight reasons for this discrepancy. We also show that downstream model
performance is reduced when labelling of training reports is performed by a
non-specialist. To allow other researchers to accelerate their research, we
make our refined abnormality definitions and labelling rules available, as well
as our easy-to-use radiology report labelling app which helps streamline this
process.
Cryogenic Memory Architecture Integrating Spin Hall Effect based Magnetic Memory and Superconductive Cryotron Devices
One of the most challenging obstacles to realizing exascale computing is
minimizing the energy consumption of L2 cache, main memory, and interconnects
to that memory. For promising cryogenic computing schemes utilizing Josephson
junction superconducting logic, this obstacle is exacerbated by the cryogenic
system requirements that expose the technology's lack of high-density,
high-speed and power-efficient memory. Here we demonstrate an array of
cryogenic memory cells consisting of a non-volatile three-terminal magnetic
tunnel junction element driven by the spin Hall effect, combined with a
superconducting heater-cryotron bit-select element. The write energy of these
memory elements is roughly 8 pJ, with a bit-select element designed to achieve
a minimum overhead power consumption of about 30%. Individual magnetic memory
cells measured at 4 K show reliable switching with low write error rates, and a
4x4 array can be fully addressed with correspondingly low bit-select error
rates. This demonstration is a first step towards a full cryogenic
memory architecture targeting energy and performance specifications appropriate
for applications in superconducting high performance and quantum computing
control systems, which require significant memory resources operating at 4 K.

Comment: 10 pages, 6 figures, submitted
Kinetic plasma-wall interaction using immersed boundary conditions
The interaction between a plasma and a solid surface is studied in a (1D-1V) kinetic approach using immersed boundary conditions and penalization to model the wall. Two solutions for the penalized wall region are investigated that either allow currents to flow within the material boundary or not. Essential kinetic aspects of sheath physics are recovered in both cases and their parametric dependencies investigated. Importantly, we show how the two approaches can be reconciled when accounting for relevant kinetic effects. Non-Maxwellian features of the ion and electron distribution functions are essential to capture the value of the potential drop in the sheath. These features lead to a sheath heat transmission factor for ions 60% larger than usually predicted. The role of collisions is discussed, along with means of incorporating minimally-relevant kinetic sheath physics into the gyrokinetic framework.
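For orientation, the baseline that these non-Maxwellian corrections modify is the classical floating-sheath estimate for Maxwellian electrons and cold ions, one common form of which is:

```latex
% Classical estimate of the sheath potential drop at a floating wall,
% assuming Maxwellian electrons and cold ions (textbook baseline; the
% abstract's point is that non-Maxwellian distribution functions shift
% this value):
\[
  \frac{e\,\Delta\phi}{T_e}
  \;\simeq\; \ln\!\sqrt{\frac{m_i}{2\pi m_e}}
  \;\approx\; 2.8 \quad \text{(hydrogen)}
\]
```

Deviations of the measured drop from this Maxwellian estimate are precisely what make the kinetic treatment of the distribution functions, rather than a fluid closure, necessary near the wall.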
Kinetic plasma-sheath self-organization
The interaction between a plasma and a solid surface is studied in a (1D-1V) kinetic framework using a localized particle and convective energy source. Matching the quasineutral plasma region and the sheath horizon is addressed in the fluid framework with a zero heat flux closure. It highlights the non-polytropic nature of the physics of parallel transport. Shortfalls of this approach compared to a reference kinetic simulation highlight the importance of the heat flux as the measure of kinetic effects. Non-collisional closure and higher-moment closure are used to determine the sound velocity. Within these frameworks, no gain in the fluid predictive capability is obtained. The kinetic constraint at the sheath horizon is discussed and modified to account for conditions that are actually met in simulations, namely quasineutrality with a small but finite charge density. Analyzing the distribution functions shows that collisional transfer is mandatory to achieve steady-state self-organization on the open field lines.